69 research outputs found
Improvements on the k-center problem for uncertain data
In real applications, there are situations where we need to model some
problems based on uncertain data. This leads us to define an uncertain model
for some classical geometric optimization problems and propose algorithms to
solve them. In this paper, we study the $k$-center problem for uncertain
input. In our setting, each uncertain point is located, independently of the
other points, in one of several possible locations in a metric space with
metric $d$, with specified probabilities, and the goal is to compute $k$
centers that minimize the expected cost over the probability space of all
realizations of the given uncertain points.
In the restricted assigned version of this problem, an assignment of uncertain
points to centers is given for any choice of centers, and the goal is to
minimize the expected cost under that assignment. In the unrestricted version,
the assignment is not specified, and the goal is to compute $k$ centers
and an assignment that together minimize the expected cost.
We give several constant approximation factor algorithms, with improved
factors, for the assigned versions of this problem in a Euclidean space and in
a general metric space. Our results significantly improve the results of
\cite{guh} and generalize the results of \cite{wang} to any dimension. Our
approach is to replace each uncertain point with a certain point and study the
properties of these certain points. The proposed algorithms are efficient and
simple to implement.
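The two-stage idea above can be sketched as follows. The paper defines its own replacement points; this sketch uses the probability-weighted centroid of each uncertain point as a hypothetical stand-in, then runs Gonzalez's classical greedy 2-approximation for the resulting certain instance. All names here are illustrative, not the paper's algorithm.

```python
import math

def representative(locations, probs):
    """Collapse an uncertain 2-D point (possible locations with
    probabilities) into one certain point. A probability-weighted
    centroid is used purely as an illustrative stand-in for the
    paper's replacement points."""
    x = sum(p * lx for (lx, _), p in zip(locations, probs))
    y = sum(p * ly for (_, ly), p in zip(locations, probs))
    return (x, y)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_k_center(points, k):
    """Gonzalez's greedy 2-approximation for the certain k-center
    problem: repeatedly add the point farthest from the chosen centers."""
    centers = [points[0]]
    while len(centers) < k:
        far = max(points, key=lambda p: min(dist(p, c) for c in centers))
        centers.append(far)
    return centers

# Usage: collapse each uncertain point, then cluster the representatives.
uncertain = [([(0, 0), (2, 0)], [0.5, 0.5]),
             ([(10, 10)], [1.0]),
             ([(0, 10), (0, 12)], [0.25, 0.75])]
reps = [representative(locs, ps) for locs, ps in uncertain]
centers = greedy_k_center(reps, 2)
```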
Brief Announcement: Distributed Algorithms for Minimum Dominating Set Problem and Beyond, a New Approach
In this paper, we study the minimum dominating set (MDS) problem and the minimum total dominating set (MTDS) problem. We propose a new idea to compute approximate MDS and MTDS. This new approach can be implemented in a distributed or parallel model. We also show how to use this approach for related problems such as the set cover problem and the k-distance dominating set problem.
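For context, the sequential baseline that distributed MDS algorithms are measured against is the classical greedy $(\ln \Delta + 1)$-approximation. The sketch below shows that baseline only; the announcement's new approach is not described in the abstract and is not reproduced here.

```python
def greedy_mds(adj):
    """Classical greedy approximation for minimum dominating set.
    `adj` maps each vertex to the set of its neighbours. Shown only
    as a sequential baseline, not the paper's distributed method."""
    undominated = set(adj)
    dom_set = set()
    while undominated:
        # Pick the vertex whose closed neighbourhood covers the most
        # still-undominated vertices.
        best = max(adj, key=lambda v: len((adj[v] | {v}) & undominated))
        dom_set.add(best)
        undominated -= adj[best] | {best}
    return dom_set
```

On a star graph the greedy rule immediately selects the hub, giving a dominating set of size one.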
Relative Fractional Independence Number and Its Applications
We define the relative fractional independence number of two graphs $G$ and
$H$, denoted $\alpha^*(G|H)$, as $\max_W \frac{\alpha(G\boxtimes W)}{\alpha(H\boxtimes W)}$,
where the maximum is taken over all graphs $W$, $G\boxtimes W$ is the
strong product of $G$ and $W$, and $\alpha$ denotes the independence number. We
give a non-trivial linear program to compute $\alpha^*(G|H)$ and discuss some
of its properties. We show that $\frac{X(G)}{X(H)} \le \alpha^*(G|H)$, where $X$
can be the independence number, the zero-error Shannon capacity, the
fractional independence number, the Lov\'{a}sz number, or Schrijver's or
Szegedy's variants of the Lov\'{a}sz number of a graph. This inequality is
the first explicit non-trivial upper bound on the ratio of these invariants
for two arbitrary graphs, and it can also be used to obtain
upper or lower bounds for the invariants themselves. As explicit applications, we
present new upper bounds for the ratio of the zero-error Shannon capacity of
two Cayley graphs and compute new lower bounds on the Shannon capacity of
certain Johnson graphs (yielding the exact value of their Haemers number).
Moreover, we show that the relative fractional independence number can be used
to present a stronger version of the well-known No-Homomorphism Lemma. The
No-Homomorphism Lemma is widely used to show the non-existence of a
homomorphism between two graphs and is also used to give an upper bound on the
independence number of a graph. Our extension of the No-Homomorphism Lemma is
computationally more accessible than its original version.
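The paper's linear program for $\alpha^*(G|H)$ is not reproduced in the abstract. As a simpler relative, the ordinary fractional independence number of a single graph is itself a small LP: maximize $\sum_v x_v$ subject to $x_u + x_v \le 1$ for every edge and $0 \le x_v \le 1$. A minimal sketch using SciPy (the function name is ours):

```python
import numpy as np
from scipy.optimize import linprog

def fractional_independence_number(n, edges):
    """Edge-constrained LP relaxation of maximum independent set:
    maximize sum(x) s.t. x_u + x_v <= 1 on each edge, 0 <= x <= 1.
    (The paper's LP for alpha*(G|H) is more involved; this is shown
    only as a simpler example of such an LP.)"""
    c = -np.ones(n)                       # linprog minimizes, so negate
    A = np.zeros((len(edges), n))
    for i, (u, v) in enumerate(edges):
        A[i, u] = A[i, v] = 1.0
    res = linprog(c, A_ub=A, b_ub=np.ones(len(edges)), bounds=(0, 1))
    return -res.fun
```

For an odd cycle such as the triangle, every vertex takes value $1/2$, so the LP value is $3/2$, strictly above the integral independence number $1$.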
Use of multidimensional item response theory methods for dementia prevalence prediction: an example using the Health and Retirement Survey and the Aging, Demographics, and Memory Study
Background: Data sparsity is a major limitation to estimating national and global dementia burden. Surveys with full diagnostic evaluations of dementia prevalence are prohibitively resource-intensive in many settings. However, validation samples from nationally representative surveys allow for the development of algorithms for the prediction of dementia prevalence nationally.
Methods: Using cognitive testing data and data on functional limitations from Wave A (2001-2003) of the ADAMS study (n = 744) and the 2000 wave of the HRS study (n = 6358), we estimated a two-dimensional item response theory model to calculate cognition and function scores for all individuals over 70. Based on diagnostic information from the formal clinical adjudication in ADAMS, we fit a logistic regression model for the classification of dementia status using cognition and function scores and applied this algorithm to the full HRS sample to calculate dementia prevalence by age and sex.
Results: Our algorithm had a cross-validated predictive accuracy of 88% (86-90) and an area under the curve of 0.97 (0.97-0.98) in ADAMS. Prevalence was higher in females than males and increased with age, with a prevalence of 4% (3-4) in individuals 70-79 years old, 11% (9-12) in individuals 80-89 years old, and 28% (22-35) in those 90 and older.
Conclusions: Our model had similar or better accuracy compared with previously reviewed algorithms for the prediction of dementia prevalence in HRS, while utilizing more flexible methods. These methods could be more easily generalized and utilized to estimate dementia prevalence in other national surveys.
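The second stage of the pipeline, classifying dementia status from the two IRT scores with logistic regression, can be sketched on simulated data. Everything below is a toy stand-in: the scores, the coefficient signs, and the adjudicated labels are all fabricated for illustration and have no connection to the ADAMS or HRS data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the IRT-derived latent scores; in the study
# these come from a two-dimensional item response theory model.
n = 1000
cognition = rng.normal(size=n)
function = rng.normal(size=n)

# Simulated "adjudicated" status: lower cognition and function raise the
# odds of dementia (signs chosen for illustration only).
true_logit = -2.0 - 1.5 * cognition - 1.0 * function
y = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Fit logistic regression by plain gradient descent on the log-likelihood.
X = np.column_stack([np.ones(n), cognition, function])
w = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

# Applying the fitted model to a target sample and averaging the predicted
# probabilities gives a model-based prevalence estimate.
prevalence = (1 / (1 + np.exp(-X @ w))).mean()
```

The last line mirrors the study's final step: the ADAMS-fitted classifier is applied to the full HRS sample and the predicted probabilities are aggregated into prevalence by age and sex.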